A Fast Convoluted Story: Scaling Probabilistic Inference for Integer Arithmetics
As illustrated by the success of integer linear programming, linear integer arithmetic is a powerful tool for modelling combinatorial problems. Furthermore, the probabilistic extension of linear programming has been used to formulate problems in neurosymbolic AI. However, two key problems prevent the adoption of neurosymbolic techniques beyond toy problems. First, probabilistic inference is inherently hard, #P-hard to be precise. Second, the discrete nature of integers renders the construction of meaningful gradients challenging, which is problematic for learning. To mitigate these issues, we formulate linear arithmetic over integer-valued random variables as tensor manipulations that can be implemented straightforwardly using modern deep learning libraries. At the core of our formulation lies the observation that the addition of two integer-valued random variables can be performed by adapting the fast Fourier transform to probabilities in the log-domain. By relying on tensor operations we obtain a differentiable data structure, which unlocks, virtually for free, gradient-based learning. In our experimental validation we show that tensorising probabilistic integer linear arithmetic and leveraging the fast Fourier transform allows us to push the state of the art by several orders of magnitude in terms of inference and learning times.
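The core operation the abstract describes, summing two independent integer-valued random variables by convolving their probability mass functions with an FFT while keeping probabilities in the log-domain for numerical stability, can be sketched as follows (a minimal NumPy illustration under our own naming, not the authors' tensorised implementation):

```python
import numpy as np

def add_rvs_log(logp, logq):
    """Log-space PMF of X + Y for independent integer RVs X and Y.

    Convolves the two PMFs via FFT; to stay numerically stable, the
    max log-probability of each operand is factored out before
    exponentiating and added back at the end.
    """
    n = len(logp) + len(logq) - 1          # support size of the sum
    a, b = logp.max(), logq.max()
    p = np.exp(logp - a)
    q = np.exp(logq - b)
    conv = np.fft.irfft(np.fft.rfft(p, n) * np.fft.rfft(q, n), n)
    conv = np.clip(conv, 1e-300, None)     # guard against tiny FFT negatives
    return np.log(conv) + a + b

# Sum of two fair six-sided dice: index k corresponds to the total k + 2.
logp = np.log(np.full(6, 1 / 6))
dist = np.exp(add_rvs_log(logp, logp))
```

Because every step is a tensor operation (exponentiation, FFT, pointwise product), the same computation is differentiable for free when expressed in a deep learning library.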
Interior Point Solving for LP-based prediction+optimisation
Solving optimization problems is the key to decision making in many real-life analytics applications. However, the coefficients of the optimization problems are often uncertain and dependent on external factors, such as future demand or energy or stock prices. Machine learning (ML) models, especially neural networks, are increasingly being used to estimate these coefficients in a data-driven way.
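The predict-then-optimise pipeline sketched here, an ML model estimating uncertain coefficients that are then fed to an LP solver, can be illustrated as follows (a hypothetical toy example using SciPy's `linprog`; the feature matrix, stand-in model weights, and constraints are our own illustration, not the paper's setup):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
features = rng.normal(size=(3, 4))   # one feature row per decision variable
weights = rng.normal(size=4)         # stand-in for a trained ML model
c_pred = features @ weights          # predicted (uncertain) cost coefficients

# Toy LP downstream of the prediction: minimise c_pred @ x
# subject to x summing to 1 and 0 <= x <= 1.
res = linprog(c_pred, A_eq=np.ones((1, 3)), b_eq=[1.0], bounds=[(0, 1)] * 3)
```

End-to-end approaches, such as the interior-point method of this paper, go further by differentiating through the solver so that prediction errors are penalised by their effect on the downstream decision, not just by coefficient accuracy.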
Supplementary Appendix for: Algorithms and Hardness for Learning Linear Thresholds from Label Proportions
Further, we have the following. Therefore, let us consider 0 ≤ p ≤ 1/2. Differentiating w.r.t. λ we obtain ∂Ψ(p, λ)/∂λ = (6p − 4)λ + 2(1 − p), and ∂²Ψ(p, λ)/∂λ² = 6p − 4 < 0, so Ψ(p, ·) is concave in λ and maximised at λ* = (1 − p)/(2 − 3p). Observing that Ψ(1/3, 2/3) = 4/9 completes the analysis. Our hardness result is via a reduction from the Smooth-Label-Cover problem defined below. Theorem 4.1 directly follows from the following theorem.
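The derivative (6p − 4)λ + 2(1 − p) and the value Ψ(1/3, 2/3) = 4/9 that appear in this passage are consistent with the quadratic Ψ(p, λ) = (3p − 2)λ² + 2(1 − p)λ. This closed form is our reconstruction, not quoted from the paper, but it can be checked mechanically in exact rational arithmetic:

```python
from fractions import Fraction as F

def Psi(p, lam):
    # Assumed closed form, chosen to match the stated derivative
    # (6p - 4)*lam + 2*(1 - p) and the stated value Psi(1/3, 2/3) = 4/9.
    return (3 * p - 2) * lam ** 2 + 2 * (1 - p) * lam

def dPsi(p, lam):
    # Derivative as stated in the text.
    return (6 * p - 4) * lam + 2 * (1 - p)

# The stated value holds exactly, and lam* = (1 - p)/(2 - 3p)
# is a critical point of Psi(p, .) for a sample p in [0, 1/2].
assert Psi(F(1, 3), F(2, 3)) == F(4, 9)
p = F(1, 4)
lam_star = (1 - p) / (2 - 3 * p)
assert dPsi(p, lam_star) == 0
```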
Enforcing convex constraints in Graph Neural Networks
Rashwan, Ahmed, Briggs, Keith, Budd, Chris, Kreusser, Lisa
Many machine learning applications require outputs that satisfy complex, dynamic constraints. This task is particularly challenging in Graph Neural Network models due to the variable output sizes of graph-structured data. In this paper, we introduce ProjNet, a Graph Neural Network framework which satisfies input-dependent constraints. ProjNet combines a sparse vector clipping method with the Component-Averaged Dykstra (CAD) algorithm, an iterative scheme for solving the best-approximation problem. We establish a convergence result for CAD and develop a GPU-accelerated implementation capable of handling large-scale inputs efficiently. To enable end-to-end training, we introduce a surrogate gradient for CAD that is both computationally efficient and better suited for optimization than the exact gradient. We validate ProjNet on four classes of constrained optimization problems: linear programming, two classes of non-convex quadratic programs, and radio transmit power optimization, demonstrating its effectiveness across diverse problem settings.
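The classical Dykstra iteration, the best-approximation scheme that the Component-Averaged variant builds on, can be sketched in a few lines (a plain NumPy sketch of the textbook algorithm, not the paper's CAD algorithm or its GPU implementation; the simplex example is our own):

```python
import numpy as np

def dykstra(x0, projections, iters=200):
    """Project x0 onto the intersection of convex sets, each given by
    its own projection operator, via Dykstra's alternating scheme."""
    x = x0.copy()
    incs = [np.zeros_like(x0) for _ in projections]   # correction terms
    for _ in range(iters):
        for i, proj in enumerate(projections):
            y = proj(x + incs[i])        # project the corrected iterate
            incs[i] = x + incs[i] - y    # update this set's correction
            x = y
    return x

# Example: nearest point to x0 in {x >= 0} ∩ {sum(x) = 1} (the simplex).
proj_nonneg = lambda v: np.maximum(v, 0.0)
proj_affine = lambda v: v + (1.0 - v.sum()) / v.size   # onto sum(x) = 1
x = dykstra(np.array([2.0, -1.0, 0.5]), [proj_nonneg, proj_affine])
```

Unlike plain alternating projections, the correction terms make the iterates converge to the *nearest* point in the intersection, which is what makes the scheme suitable as a constraint-enforcing output layer.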